Calculate lnRR and lnRR variance and prepare dataset for analysis.
Code
# calculate effect sizes and their variances
dt_all <- effect_lnRR(dat_all)

# add effect size ID (Obs_ID)
dt_all$Obs_ID <- 1:nrow(dt_all)

# use vcalc to calculate variance-covariance matrix
VCV <- vcalc(vi = lnRR_var, cluster = Cohort_ID, obs = Obs_ID,
             rho = 0.5, data = dt_all)

# convert pattern maximum diameter/length, pattern area, and total area to logarithms
dt_all$area_pattern_T <- dt_all$Area_pattern * dt_all$Number_pattern
dt_all$Log_diameter <- log(dt_all$Diameter_pattern)
dt_all$Log_area <- log(dt_all$Area_pattern)
dt_all$Log_background <- log(dt_all$Area_background)
dt_all$Log_area_pattern_T <- log(dt_all$Area_pattern * dt_all$Number_pattern)

dt_all <- dt_all %>% mutate_if(is.character, as.factor)
1.2.0.1 Description of dataset
Code
meta <- read.csv(here("data/metadata.csv"), header = T, fileEncoding = "CP932")
datatable(meta, extensions = "Buttons")
1.2.0.2 Dataset
Code
datatable(dt_all, extensions = "Buttons")
Table 1 - Data descriptions and dataset
Distribution of effect size and its variance
Code
hist(dt_all$lnRR)
hist(dt_all$lnRR_var)
Figure 1— lnRR
Figure 2— lnRR variance
2 Let’s run meta-analyses and meta-regressions
2.1 Meta-analysis: overall effect size
First, we ran meta-analytical models to calculate total I2 (a measure of heterogeneity not caused by sampling error; Higgins et al., 2003) and the partial I2 explained by each random factor.
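The intercept-only model itself is not shown in this chunk. A minimal sketch (not necessarily the authors' exact code), assuming the same random-effects structure as the meta-regression models (random intercepts for Study_ID and Obs_ID) and `i2_ml()` from the orchaRd package, might look like:

```r
# sketch of the overall (intercept-only) meta-analytic model;
# the name `ma_all` matches the object used for plotting later
ma_all <- rma.mv(yi = lnRR, V = VCV,
                 random = list(~1 | Study_ID, ~1 | Obs_ID),
                 data = dt_all,
                 method = "REML", test = "t", sparse = TRUE)
summary(ma_all)

# total I2 and the partial I2 explained by each random factor (orchaRd)
i2_ml(ma_all)
```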
Figure 3— Estimates of overall effect size and 95% confidence intervals
2.2 Meta-regressions: uni-moderator
Next, we performed uni-moderator meta-regression models with each of eight moderators: 1) treatment stimulus pattern type (eyespots vs. non-eyespots), 2) pattern area, 3) the number of pattern shapes, 4) prey material type, 5) maximum pattern diameter/length, 6) total pattern area, 7) total area of prey surface, and 8) prey shape type.
We used log-transformed data for pattern area, total pattern area, total area of prey surface, and pattern maximum diameter/length in our analysis to normalise these moderators.
bubble_plot(mr_num,
            mod = "Number_pattern",
            group = "Study_ID",
            xlab = "Number of patterns")
Figure 6— Effect size and number of patterns
Our dataset includes two material types of prey: real/imitation and artificial abstract butterflies. Is there a significant difference in effect size between the two material types?
bubble_plot(mr_background,
            mod = "Log_background",
            group = "Study_ID",
            xlab = "Log-transformed total prey surface area")
Figure 9— Effect size and total prey surface area
Our dataset includes four prey shape types: real butterfly, abstract artificial butterfly, abstract artificial caterpillar, and abstract artificial prey. Is there a significant difference in effect size among these prey shapes?
mat_ex <- cbind(contrMat(rep(1, length(mr_prey_shape$ci.ub)), type = "Tukey"))
sig_test <- summary(glht(mr_prey_shape, linfct = mat_ex), test = adjusted("none"))
sig_test
Simultaneous Tests for General Linear Hypotheses
Fit: rma.mv(yi = lnRR, V = VCV, mods = ~Shape_prey - 1, random = list(~1 |
Study_ID, ~1 | Obs_ID), data = dt_all, method = "REML", test = "t",
sparse = TRUE)
Linear Hypotheses:
Estimate Std. Error z value Pr(>|z|)
2 - 1 == 0 -0.25583 0.16444 -1.556 0.120
3 - 1 == 0 -0.31057 0.21242 -1.462 0.144
4 - 1 == 0 -0.09561 0.14577 -0.656 0.512
3 - 2 == 0 -0.05474 0.22317 -0.245 0.806
4 - 2 == 0 0.16022 0.16104 0.995 0.320
4 - 3 == 0 0.21496 0.20980 1.025 0.306
(Adjusted p values reported -- none method)
2.3 Correlation visualisation and choosing moderators
Before we run multi-moderator meta-regressions, we need to consider the correlation between moderators. Area, diameter and background seem to be correlated. Therefore, we visualised the correlation between these variables.
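One way such a check could be done (a sketch, not necessarily the authors' code) is a pairwise correlation matrix of the log-transformed continuous moderators created earlier:

```r
# pairwise Pearson correlations among the continuous moderators (sketch)
cor(dt_all[, c("Log_area", "Log_diameter", "Log_background")],
    use = "pairwise.complete.obs")
```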
Figure 12— Correlation between categorical variables
We should not include “Shape_prey (prey shape type)” and “Type_prey (prey material type)” in the same model as they are correlated.
2.3.2 Choose moderators
We used model R2 values to find better models and moderator VIF values to check multicollinearity between moderators. Higher R2 indicates a better model, and VIF > 2 indicates multicollinearity.
It seems pattern area is a slightly better predictor than diameter.
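As a sketch of how this comparison could be run, assuming `r2_ml()` from orchaRd and `vif()` from metafor (the names `mr_area` and `mr_diameter` follow the plotting code; `mr_multi` is a hypothetical multi-moderator model object):

```r
# compare marginal/conditional R2 of the two uni-moderator models (sketch)
r2_ml(mr_area)      # model with log-transformed pattern area
r2_ml(mr_diameter)  # model with log-transformed diameter

# variance inflation factors to check multicollinearity between moderators
# (`mr_multi` is a hypothetical multi-moderator rma.mv model)
vif(mr_multi)
```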
2.4 Meta-regressions: multi-moderator
Third, we ran a multi-moderator meta-regression model including treatment stimulus pattern type, pattern area, the number of pattern shapes, and prey material type; the remaining moderators were excluded because of the correlations described above.
Finally, we tested for publication bias using a funnel plot and Egger’s test. We also checked for a decline effect by adding the year of publication as a moderator to the meta-analysis model.
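The fitting of the bias models is not shown here. A minimal sketch, assuming sample-size columns `N_control` and `N_treatment` (hypothetical names) and the same random-effects structure as the main model (the object names `bias_model` and `year_model` match those used in the plots):

```r
# square root of the inverse of the effective sample size, used as the
# Egger's-test moderator; N_control / N_treatment are hypothetical column names
dt_all$e_n <- with(dt_all, (N_control * N_treatment) / (N_control + N_treatment))
dt_all$sqrt_inv_e_n <- sqrt(1 / dt_all$e_n)

# Egger's test as a uni-moderator meta-regression (sketch)
bias_model <- rma.mv(yi = lnRR, V = VCV,
                     mods = ~ sqrt_inv_e_n,
                     random = list(~1 | Study_ID, ~1 | Obs_ID),
                     data = dt_all, method = "REML", test = "t", sparse = TRUE)

# decline effect: year of publication as a moderator (sketch)
year_model <- rma.mv(yi = lnRR, V = VCV,
                     mods = ~ Year,
                     random = list(~1 | Study_ID, ~1 | Obs_ID),
                     data = dt_all, method = "REML", test = "t", sparse = TRUE)
```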
# extract the results of each model and combine them into one data frame for plotting
main <- mod_results(ma_all, group = "Study_ID")
pattern <- mod_results(mr_eyespot, mod = "Treatment_stimulus", group = "Study_ID")
shape <- mod_results(mr_prey_type, mod = "Type_prey", group = "Study_ID")
comb <- submerge(main, pattern, shape)

p1 <- orchard_plot(comb,
                   group = "Study_ID",
                   xlab = "log response ratio (lnRR)",
                   tree.order = c("Artificial", "Real", "Non_eyespot", "Eyespot", "Intrcpt"),
                   angle = 45,
                   branch.size = 2) +
  theme(legend.direction = "horizontal",
        legend.text = element_text(size = 6)) +
  scale_colour_brewer(palette = "Pastel1") +
  scale_fill_brewer(palette = "Pastel1")
p1

p2 <- orchard_plot(mr_prey_shape,
                   mod = "Shape_prey",
                   group = "Study_ID",
                   xlab = "log response ratio (lnRR)",
                   angle = 45,
                   legend.pos = "bottom.right") +
  theme(legend.direction = "horizontal",
        legend.text = element_text(size = 6)) +
  scale_colour_brewer(palette = "YlGnBu") +
  scale_fill_brewer(palette = "YlGnBu")
p2
Figure 18— main figure - discrete moderators
Figure 19— Prey type
Code
# these figs were created by multi meta-regression model results using the log-transformed area
p3 <- bubble_plot(mr_area,
                  mod = "Log_area",
                  group = "Study_ID",
                  xlab = "Log-transformed area",
                  est.lwd = 1.2,
                  est.col = "deeppink3") +
  scale_x_continuous(breaks = seq(0, 7, 1.5)) +
  scale_y_continuous(breaks = seq(-3.0, 4.5, 1.0))

# number of patterns
p4 <- bubble_plot(mr_num,
                  mod = "Number_pattern",
                  group = "Study_ID",
                  xlab = "Number of patterns",
                  est.lwd = 1.2,
                  est.col = "turquoise4") +
  scale_x_continuous(breaks = seq(0, 11, 2)) +
  scale_y_continuous(breaks = seq(-3.0, 4.5, 1.0))

# combine
p3 / p4 + plot_annotation(tag_levels = "a")

# total pattern area
p5 <- bubble_plot(mr_area_T,
                  mod = "Log_area_pattern_T",
                  group = "Study_ID",
                  xlab = "Log-transformed total pattern area")

# diameter
p6 <- bubble_plot(mr_diameter,
                  mod = "Log_diameter",
                  group = "Study_ID",
                  xlab = "Log-transformed diameter")

# background
p7 <- bubble_plot(mr_background,
                  mod = "Log_background",
                  group = "Study_ID",
                  xlab = "Log-transformed total prey surface area")

p5 / p6 / p7 + plot_annotation(tag_levels = "a")
Figure 20— main figure - continuous moderators
Figure 21— pattern area
Code
# funnel plot
# pdf("funnel.pdf")
funnel(ma_all, yaxis = "seinv",
       xlab = "Standardised residuals",
       ylab = "Precision (inverse of SE)",
       # xlim = c(-4.0, 4.5),
       # ylim = c(0.01, 60.0),
       col = c(alpha("black", 0.5)),
       cex = 1.5)
# pdf("funnel2.pdf", width = 15, height = 10)
# dev.off()

# Egger's test and decline effect
p8 <- bubble_plot(bias_model,
                  mod = "sqrt_inv_e_n",
                  group = "Study_ID",
                  xlab = "Square root of inverse of effective sample size") +
  scale_y_continuous(breaks = seq(-2.5, 4.5, 1.5))

p9 <- bubble_plot(year_model,
                  mod = "Year",
                  group = "Study_ID",
                  xlab = "Year of publication") +
  scale_y_continuous(breaks = seq(-2.5, 4.0, 1.5))

# combine plots created by bubble_plot()
pub <- p8 / p9
pub + plot_annotation(tag_levels = 'a')
Figure 22— Funnel plot
Figure 23— Egger’s test and Decline effect
Code
p11 <- bubble_plot(multi_bias,
                   mod = "sqrt_inv_e_n",
                   group = "Study_ID",
                   xlab = "Square root of inverse of effective sample size")

p12 <- bubble_plot(multi_bias,
                   mod = "Year",
                   group = "Study_ID",
                   xlab = "Year of publication")

p11 / p12 + plot_annotation(tag_levels = 'a')
Figure 24— Egger’s test and Decline effect (multi-moderator)
trees <- read.nexus(here("data/bird_phy.nex"))
plot(trees[[1]])
Figure 25— Phylogenetic tree of bird species
We tested whether phylogeny should be included in our analyses.
Code
# data
dat_predator <- filter(dat_all, Dataset == "predator")

# check if the tree is ultrametric
# is.ultrametric(trees[[1]])
# [1] TRUE

# example of variance-covariance matrix
vcv(trees[[1]], corr = T)
# calculate lnRR and lnRR_var
dat_pred <- effect_lnRR(dat_predator)
dat_pred$Obs_ID <- 1:nrow(dat_pred)
V <- vcalc(lnRR_var, cluster = Cohort_ID, subgroup = Obs_ID, data = dat_pred)

# create function to run meta-analysis for 50 trees
phy_model <- function(cor_tree = vcv_tree) {
  model <- rma.mv(yi = lnRR,
                  V = V,
                  random = list(~1 | Study_ID,
                                ~1 | Shared_control_ID,
                                ~1 | Cohort_ID,
                                ~1 | Obs_ID,
                                ~1 | Bird_species,
                                ~1 | Phylogeny),
                  R = list(Phylogeny = cor_tree),
                  test = "t",
                  method = "REML",
                  sparse = TRUE,
                  data = dat_pred)
  model
}

# running 50 meta-analyses with 50 different trees
# We got 1000 phylogenetic trees from BirdTree.org (https://birdtree.org)
# Only 50 trees are used as in Nakagawa & Villemereuil (2019) (https://doi.org/10.1093/sysbio/syy089)
tree_50 <- trees[1:50]
vcv_tree_50 <- map(tree_50, ~ vcv(.x, corr = TRUE))
ma_50 <- mclapply(vcv_tree_50, phy_model)  # detectCores() = 12

# always best to save RDS files for an analysis which takes more than a min or so
# saveRDS(ma_50, here("Rdata", "ma_50.RDS"))
ma_50 <- readRDS(here("Rdata", "ma_50.RDS"))

# combining the results
est_50 <- map_dbl(ma_50, ~ .x$b[[1]])
se_50 <- map_dbl(ma_50, ~ .x$se)
df_50 <- c(rbind(est_50, se_50))

# creating an array required by pool.mi
my_array <- array(df_50, dim = c(1, 2, 50))
pool.mi(my_array)